The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the available infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
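As a concrete illustration of two of the strategies reported above (patch-based training for overly large samples, and k-fold cross-validation combined with ensembling of identically configured models), the sketch below shows one possible setup. It assumes PyTorch and scikit-learn; the placeholder network, patch size, and random data are purely illustrative and not taken from any surveyed solution.

```python
# Minimal sketch: patch-based training of large 3D volumes plus 5-fold
# cross-validation with ensembling of the per-fold models.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

def extract_patches(volume: np.ndarray, patch: int = 64, stride: int = 64):
    """Split a 3D volume (D, H, W) into non-overlapping cubic patches."""
    d, h, w = volume.shape
    patches = []
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                patches.append(volume[z:z + patch, y:y + patch, x:x + patch])
    return np.stack(patches)

# Toy data: 20 volumes with binary labels, standing in for a real training set.
volumes = [np.random.rand(128, 128, 128).astype(np.float32) for _ in range(20)]
labels = np.random.randint(0, 2, size=20)

# 5-fold cross-validation: one (deliberately tiny) model per fold,
# all folds kept for ensembling at test time.
fold_models = []
for train_idx, _val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(volumes):
    model = nn.Sequential(nn.Flatten(), nn.Linear(64 ** 3, 2))  # placeholder network
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for i in train_idx:
        patches = torch.from_numpy(extract_patches(volumes[i]))  # (8, 64, 64, 64)
        target = torch.full((patches.shape[0],), int(labels[i]))
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(patches), target)
        loss.backward()
        opt.step()
    fold_models.append(model)

# Ensembling over identically trained models: average the softmax outputs.
test_patches = torch.from_numpy(extract_patches(volumes[0]))
with torch.no_grad():
    probs = torch.stack([m(test_patches).softmax(-1).mean(0) for m in fold_models]).mean(0)
print("ensembled class probabilities:", probs.numpy())
```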
A hallmark of the deep learning era for computer vision is the successful use of large-scale labeled datasets to train feature representations for tasks ranging from object recognition and semantic segmentation to optical flow estimation and novel view synthesis of 3D scenes. In this work, we aim to learn dense discriminative object representations for low-shot category recognition without requiring any category labels. To this end, we propose Deep Object Patch Encodings (DOPE), which can be trained from multiple views of object instances without any category or semantic object part labels. To train DOPE, we assume access to sparse depths, foreground masks and known cameras, to obtain pixel-level correspondences between views of an object, and use this to formulate a self-supervised learning task to learn discriminative object patches. We find that DOPE can directly be used for low-shot classification of novel categories using local-part matching, and is competitive with and outperforms supervised and self-supervised learning baselines. Code and data available at https://github.com/rehg-lab/dope_selfsup.
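The following is a minimal sketch of the kind of correspondence-driven objective described above: dense features at corresponding pixels of two views are treated as positives in an InfoNCE loss. The tiny convolutional encoder, feature dimension, temperature, and random correspondences are illustrative assumptions, not DOPE's actual architecture or training setup.

```python
# Sketch: contrastive loss over pixel-level correspondences between two views.
import torch
import torch.nn.functional as F

def correspondence_nce(feat_a, feat_b, uv_a, uv_b, temperature=0.07):
    """feat_*: (C, H, W) dense feature maps; uv_*: (N, 2) matched pixel (x, y) coords."""
    fa = F.normalize(feat_a[:, uv_a[:, 1], uv_a[:, 0]].T, dim=1)   # (N, C)
    fb = F.normalize(feat_b[:, uv_b[:, 1], uv_b[:, 0]].T, dim=1)   # (N, C)
    logits = fa @ fb.T / temperature                               # (N, N)
    targets = torch.arange(fa.shape[0])                            # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: a tiny conv encoder and a handful of fake correspondences.
encoder = torch.nn.Conv2d(3, 32, kernel_size=3, padding=1)
img_a, img_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
uv = torch.randint(0, 64, (128, 2))   # in practice derived from depth, masks, and cameras
loss = correspondence_nce(encoder(img_a)[0], encoder(img_b)[0], uv, uv)
loss.backward()
```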
Improving the runtime performance of deep neural networks (DNNs) is crucial because of their wide adoption in real-world applications. Existing approaches to optimizing the tensor algebra expressions of a DNN only consider expressions representable by a fixed set of predefined operators, missing possible optimization opportunities between general expressions. We propose Ollie, the first derivation-based tensor program optimizer. Ollie optimizes tensor programs by leveraging transformations between general tensor algebra expressions, enabling a significantly larger expression search space that includes those supported by prior work as special cases. Ollie uses a hybrid derivation-based optimizer that effectively combines explorative and guided derivations to quickly discover highly optimized expressions. Evaluation on seven DNNs shows that Ollie can outperform existing optimizers by up to 2.73$\times$ (1.46$\times$ on average) on an A100 GPU and by up to 2.68$\times$ (1.51$\times$ on average) on a V100 GPU, respectively.
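As a toy illustration of why transformations between equivalent tensor algebra expressions matter (this is not Ollie's derivation engine, only the underlying intuition), the snippet below contrasts two algebraically equivalent contraction orders with very different costs.

```python
# Two equivalent expressions for the same result, with very different cost.
import numpy as np

A = np.random.rand(512, 64)
B = np.random.rand(64, 512)
x = np.random.rand(512)

# Derivation 1: (A @ B) @ x materializes a 512x512 intermediate
#   (~512*64*512 multiply-adds for the matmul alone).
y1 = (A @ B) @ x

# Derivation 2: A @ (B @ x) contracts with the vector first and only ever
#   builds length-64 and length-512 intermediates (~2*512*64 multiply-adds).
y2 = A @ (B @ x)

assert np.allclose(y1, y2)   # same expression semantics, very different cost
```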
Self-occlusion makes cloth manipulation challenging, as it is difficult to estimate the full state of the cloth. Ideally, a robot trying to unfold a crumpled or folded cloth should be able to reason about the occluded regions of the cloth. We leverage recent advances in pose estimation to build a system that uses explicit occlusion reasoning to unfold crumpled cloth. Specifically, we first learn a model to reconstruct the mesh of the cloth. However, this model is prone to errors due to the complexity of cloth configurations and the ambiguity introduced by occlusion. Our key insight is that we can further refine the predicted reconstruction by performing test-time finetuning with a self-supervised loss. The resulting reconstructed mesh allows us to plan with a mesh-based dynamics model while reasoning about occlusion. We evaluate our system on cloth flattening and cloth canonicalization, in which the goal is to manipulate the cloth into a canonical pose. Our experiments show that our method significantly outperforms prior methods that do not explicitly account for occlusion or do not perform test-time optimization. Videos and visualizations can be found on our $\href{https://sites.google.com/view/occlusion-reason/home/home}{\text{project website}}$.
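A minimal sketch of the test-time refinement idea described above: a predicted cloth mesh is fine-tuned against the observed visible points with a self-supervised loss. The one-sided Chamfer objective, regularizer, and optimizer settings below are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: test-time optimization of predicted mesh vertices against observed points.
import torch

def one_sided_chamfer(observed, verts):
    """Mean distance from each observed point to its nearest mesh vertex."""
    d = torch.cdist(observed, verts)          # (N_obs, N_verts)
    return d.min(dim=1).values.mean()

observed_pts = torch.rand(2000, 3)            # depth-camera points on the visible cloth
pred_verts = torch.rand(500, 3)               # vertices predicted by the reconstruction model

verts = pred_verts.clone().requires_grad_(True)
opt = torch.optim.Adam([verts], lr=1e-2)
for step in range(100):                       # test-time refinement loop
    opt.zero_grad()
    loss = one_sided_chamfer(observed_pts, verts) \
         + 1e-3 * (verts - pred_verts).pow(2).mean()   # stay close to the prediction
    loss.backward()
    opt.step()
```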
Existing research on continual learning of a sequence of tasks focuses on dealing with catastrophic forgetting, where the tasks are assumed to be dissimilar and to share little knowledge. Some work has also been done on transferring previously learned knowledge to new tasks when the tasks are similar and share knowledge. To the best of our knowledge, no technique has been proposed for learning a sequence of mixed similar and dissimilar tasks that can deal with forgetting while also transferring knowledge both forward and backward. This paper proposes such a technique for learning both types of tasks in the same network. For dissimilar tasks, the algorithm focuses on dealing with forgetting; for similar tasks, it focuses on selectively transferring the knowledge learned from some similar previous tasks to improve learning of the new task. Additionally, the algorithm automatically detects whether a new task is similar to any previous task. Empirical evaluation using sequences of mixed tasks demonstrates the effectiveness of the proposed model.
As the periodic conference calls of publicly traded companies, earnings calls (ECs) have been widely studied as a fundamental market indicator because of their high analytical value for corporate fundamentals. The recent emergence of deep learning techniques has shown great promise in creating automated pipelines that benefit EC-supported financial applications. However, these methods treat all included content as informative, without refining valuable semantics from the long-text transcripts, and they suffer from the EC scarcity problem. Meanwhile, as black-box methods they have inherent difficulty in providing human-understandable explanations. To this end, this paper proposes a multi-domain transformer-based counterfactual augmentation, named MTCA, to address the above problems. Specifically, we first propose a transformer-based EC encoder to attentively quantify the task-inspired significance of critical EC content for market inference. Then, a multi-domain counterfactual learning framework is developed to evaluate gradient-based variations after perturbing the limited EC informative texts with plentiful cross-domain documents, enabling MTCA to perform unsupervised data augmentation. As a bonus, we discover a way to use non-training data as instance-based explanations, for which we show results with case studies. Extensive experiments on real-world financial datasets demonstrate the effectiveness of interpretable MTCA in improving the volatility evaluation ability of the state of the art by 14.2% in accuracy.
Labels are costly and sometimes unreliable. Noisy label learning, semi-supervised learning, and contrastive learning are three different designs for learning processes that require less annotation cost. Semi-supervised learning and contrastive learning have recently been demonstrated to improve learning strategies that address datasets with noisy labels. Still, the inner connections between these fields, as well as the potential to combine their strengths, are only beginning to emerge. In this paper, we explore further ways and advantages of fusing them. Specifically, we propose CSSL, a unified contrastive semi-supervised learning algorithm, and CoDiM (Contrastive DivideMix), a novel algorithm for learning with noisy labels. CSSL leverages the power of classical semi-supervised learning and contrastive learning techniques, and is further adapted into CoDiM, which learns robustly from multiple types and levels of label noise. We show that CoDiM brings consistent improvements and achieves state-of-the-art results on multiple benchmarks.
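To make the combination concrete, the sketch below (which is not CSSL or CoDiM itself) joins the two ingredients the abstract fuses: a pseudo-labeling semi-supervised term and a contrastive term over two views of the same unlabeled images. The encoder, confidence threshold, temperature, and loss weights are illustrative assumptions.

```python
# Sketch: supervised + pseudo-label semi-supervised + contrastive terms in one loss.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
classifier = torch.nn.Linear(128, 10)

def combined_loss(x_lab, y_lab, x_unlab_weak, x_unlab_strong, threshold=0.95):
    # Supervised cross-entropy on the (possibly noisy) labeled batch.
    sup = F.cross_entropy(classifier(encoder(x_lab)), y_lab)

    # Semi-supervised term: pseudo-label confident weak views, train on strong views.
    with torch.no_grad():
        probs = classifier(encoder(x_unlab_weak)).softmax(-1)
        conf, pseudo = probs.max(-1)
    mask = conf > threshold
    semi = F.cross_entropy(classifier(encoder(x_unlab_strong)), pseudo, reduction="none")
    semi = (semi * mask).mean()

    # Contrastive term: weak and strong views of the same image are positives.
    za = F.normalize(encoder(x_unlab_weak), dim=1)
    zb = F.normalize(encoder(x_unlab_strong), dim=1)
    logits = za @ zb.T / 0.1
    contrast = F.cross_entropy(logits, torch.arange(za.shape[0]))

    return sup + semi + 0.5 * contrast

loss = combined_loss(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)),
                     torch.rand(16, 3, 32, 32), torch.rand(16, 3, 32, 32))
loss.backward()
```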
Robotic manipulation of cloth remains challenging due to the complex dynamics of cloth, the lack of a low-dimensional state representation, and self-occlusion. In contrast to previous model-based approaches that learn a pixel-space dynamics model or a dynamics model over a compressed latent vector, we propose to learn a particle-based dynamics model from a partial point cloud observation. To overcome the challenge of partial observability, we infer which visible points are connected on the underlying cloth mesh. We then learn a dynamics model over this visible connectivity graph. Compared with previous learning-based approaches, our particle-based representation imposes a strong inductive bias for learning the underlying cloth physics; it is invariant to visual features; and its predictions can be more easily visualized. We show that our method greatly outperforms previous state-of-the-art model-based and model-free reinforcement learning methods in simulation. Furthermore, we demonstrate zero-shot sim-to-real transfer, where we deploy a model trained in simulation on a Franka arm and show that it can successfully smooth different types of cloth from crumpled configurations. Videos can be found on our project website.
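The sketch below illustrates the flavor of this representation: a graph is built over the visible points (with a simple radius heuristic standing in for the learned visible-connectivity inference) and a single message-passing step of a particle dynamics model is run over it. The radius, feature sizes, and MLPs are illustrative assumptions.

```python
# Sketch: visible-point graph construction plus one GNN-style dynamics step.
import torch

points = torch.rand(300, 3)                     # partial point cloud of the cloth
radius = 0.1
dist = torch.cdist(points, points)
src, dst = torch.nonzero((dist < radius) & (dist > 0), as_tuple=True)  # candidate edges

edge_mlp = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64))
node_mlp = torch.nn.Sequential(torch.nn.Linear(64 + 3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))

# One message-passing step: aggregate edge messages, then predict per-particle motion.
messages = edge_mlp(torch.cat([points[src], points[dst] - points[src]], dim=1))
agg = torch.zeros(points.shape[0], 64).index_add_(0, dst, messages)
delta = node_mlp(torch.cat([agg, points], dim=1))
next_points = points + delta                    # predicted particle positions at t+1
```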
Continual learning suffers from catastrophic forgetting, a phenomenon in which concepts learned earlier are forgotten in favor of more recent samples. In this work, we challenge the assumption that continual learning is inevitably associated with catastrophic forgetting by presenting a set of tasks that, surprisingly, show no catastrophic forgetting when learned continually. We provide evidence that these reconstruction-type tasks exhibit positive forward transfer, and that single-view reconstruction performance on both learned and novel categories improves over time. We provide a novel analysis of knowledge transfer ability by examining the output distribution shift across sequentially learned tasks. Finally, we show that the robustness of these tasks makes it possible to use them as proxy representation-learning tasks for continual classification. The codebase, dataset, and pre-trained models released with this paper can be found at https://github.com/rehg-lab/lrorec.
We study the composition style in deep image matting, a notion that characterizes a data generation flow on how to exploit limited foregrounds and random backgrounds to form a training dataset. Prior art executes this flow in a completely random manner by simply going through the foreground pool or by optionally combining two foregrounds before foreground-background composition. In this work, we first show that naive foreground combination can be problematic and therefore derive an alternative formulation to reasonably combine foregrounds. Our second contribution is an observation that matting performance can benefit from a certain occurrence frequency of combined foregrounds and their associated source foregrounds during training. Inspired by this, we introduce a novel composition style that binds the source and combined foregrounds in a definite triplet. In addition, we also find that different orders of foreground combination lead to different foreground patterns, which further inspires a quadruplet-based composition style. Results under controlled experiments on four matting baselines show that our composition styles outperform existing ones and invite consistent performance improvement on both composited and real-world datasets. Code is available at: https://github.com/coconuthust/composition_styles
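For readers unfamiliar with the data-generation flow being discussed, the sketch below shows standard foreground-background composition and one way two foregrounds can be combined before composition (where the order of combination matters). It is an illustration of the flow, not the paper's specific composition styles or triplet/quadruplet sampling scheme.

```python
# Sketch: alpha compositing and a foreground-over-foreground combination.
import numpy as np

def composite(fg, alpha, bg):
    """Standard matting composition: I = alpha * F + (1 - alpha) * B."""
    return alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg

def combine_foregrounds(fg1, a1, fg2, a2):
    """Lay foreground 1 over foreground 2 (order matters, hence quadruplet-style sampling)."""
    alpha = a1 + (1.0 - a1) * a2
    safe = np.clip(alpha, 1e-6, None)[..., None]
    fg = (a1[..., None] * fg1 + (1.0 - a1)[..., None] * a2[..., None] * fg2) / safe
    return fg, alpha

# Toy example with random images/mattes standing in for a real foreground pool.
h, w = 64, 64
fg1, fg2, bg = (np.random.rand(h, w, 3) for _ in range(3))
a1, a2 = np.random.rand(h, w), np.random.rand(h, w)
fg_combined, a_combined = combine_foregrounds(fg1, a1, fg2, a2)
image = composite(fg_combined, a_combined, bg)      # one composited training sample
```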